Integer Echo State Networks: Hyperdimensional Reservoir Computing
We propose an approximation of Echo State Networks (ESN) that can be
efficiently implemented on digital hardware based on the mathematics of
hyperdimensional computing. The reservoir of the proposed Integer Echo State
Network (intESN) is a vector containing only n-bit integers (where n < 8 is
normally sufficient for satisfactory performance). The recurrent matrix
multiplication is replaced with an efficient cyclic shift operation. The intESN
architecture is verified on typical reservoir computing tasks: memorizing a
sequence of inputs, classifying time series, and learning dynamic processes.
Such an architecture results in dramatic improvements in memory footprint and
computational efficiency, with minimal performance loss.
Comment: 10 pages, 10 figures, 1 table
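As a concrete illustration, here is a minimal NumPy sketch of the update rule the abstract describes: a cyclic shift in place of the recurrent matrix multiplication, integer addition of an encoded input, and clipping to keep the state in a small integer range. The bipolar input encoding, the dimension D, and the clipping threshold kappa are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def intesn_update(x, u_enc, kappa=3):
    """One intESN-style reservoir update (sketch).

    x     : current reservoir state, small-integer vector of dimension D
    u_enc : input encoded as a bipolar {-1, +1} vector of dimension D
    kappa : clipping threshold keeping the state within n-bit integers
    """
    x = np.roll(x, 1)                 # cyclic shift replaces W_rec @ x
    x = x + u_enc                     # integer addition of the encoded input
    return np.clip(x, -kappa, kappa)  # keep entries in a small integer range

# Hypothetical usage: drive a 1000-dimensional reservoir with random inputs.
D = 1000
rng = np.random.default_rng(0)
state = np.zeros(D, dtype=np.int8)
for _ in range(50):
    u = rng.choice([-1, 1], size=D).astype(np.int8)  # one encoded input token
    state = intesn_update(state, u)
```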
An Approach for Self-Adaptive Path Loss Modelling for Positioning in Underground Environments
This paper proposes a real-time self-adaptive approach for accurate path loss estimation in underground mines and tunnels, based on signal strength measurements from heterogeneous radio communication technologies. The proposed model is simple to implement. The methodology is validated in simulations and verified by measurements taken in real environments. The proposed method matches the positioning accuracy of existing approaches while requiring less engineering effort.
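The abstract does not spell out the adaptation scheme, but path loss estimation of this kind is commonly built on the log-distance model RSS(d) = RSS(d0) - 10 * n * log10(d / d0). The sketch below assumes that formulation and fits the reference power and path loss exponent to signal strength measurements by least squares; all names and numbers are illustrative.

```python
import numpy as np

def fit_log_distance(distances, rss, d0=1.0):
    """Least-squares fit of the log-distance path loss model (sketch).

    Model: RSS(d) = RSS(d0) - 10 * n * log10(d / d0).
    Returns the reference power RSS(d0) and the path loss exponent n.
    """
    x = -10.0 * np.log10(np.asarray(distances) / d0)
    A = np.column_stack([np.ones_like(x), x])     # columns: [1, -10*log10(d/d0)]
    (rss_d0, n), *_ = np.linalg.lstsq(A, np.asarray(rss), rcond=None)
    return rss_d0, n

# Hypothetical measurements: anchor at 1 m, readings at several distances (dBm).
d = [1.0, 2.0, 5.0, 10.0, 20.0]
rss = [-40.1, -46.2, -54.0, -60.3, -66.1]
rss_d0, n = fit_log_distance(d, rss)
# Invert the fitted model to estimate range from a new reading of -58 dBm.
dist_est = 1.0 * 10 ** ((rss_d0 - (-58.0)) / (10 * n))
```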
Cellular Automata Can Reduce Memory Requirements of Collective-State Computing
Various non-classical approaches to distributed information processing, such
as neural networks, computation with Ising models, reservoir computing, vector
symbolic architectures, and others, employ the principle of collective-state
computing. In this type of computing, the variables relevant in a computation
are superimposed into a single high-dimensional state vector, the
collective-state. The variable encoding uses a fixed set of random patterns,
which has to be stored and kept available during the computation. Here we show
that an elementary cellular automaton with rule 90 (CA90) enables a space-time
tradeoff for collective-state computing models that use random dense binary
representations, i.e., memory requirements can be traded off against the
computation of running CA90. We investigate the randomization behavior of CA90, in particular,
the relation between the length of the randomization period and the size of the
grid, and how CA90 preserves similarity in the presence of the initialization
noise. Based on these analyses we discuss how to optimize a collective-state
computing model, in which CA90 expands representations on the fly from short
seed patterns - rather than storing the full set of random patterns. The CA90
expansion is applied and tested in concrete scenarios using reservoir computing
and vector symbolic architectures. Our experimental results show that
collective-state computing with CA90 expansion performs on par with
traditional collective-state models, in which random patterns are generated
initially by a pseudo-random number generator and then stored in a large
memory.
Comment: 13 pages, 11 figures
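Rule 90 itself is simple: each cell's next state is the XOR of its two neighbours. The sketch below shows the CA90 step on a circular grid and one plausible way to expand a short seed into a longer pattern by concatenating successive states; the actual expansion scheme studied in the paper may differ, and all sizes are illustrative.

```python
import numpy as np

def ca90_step(state):
    """One step of elementary cellular automaton rule 90 on a circular grid:
    each new cell is the XOR of its left and right neighbours."""
    return np.roll(state, 1) ^ np.roll(state, -1)

def expand_seed(seed, n_steps):
    """Expand a short binary seed into a longer pattern by concatenating
    successive CA90 states (sketch of on-the-fly codebook expansion)."""
    states, s = [seed], seed
    for _ in range(n_steps - 1):
        s = ca90_step(s)
        states.append(s)
    return np.concatenate(states)

# Hypothetical usage: a 32-bit seed expanded into a 256-bit pattern,
# instead of storing the full 256-bit random pattern in memory.
rng = np.random.default_rng(1)
seed = rng.integers(0, 2, size=32, dtype=np.uint8)
pattern = expand_seed(seed, n_steps=8)
```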
On Effects of Compression with Hyperdimensional Computing in Distributed Randomized Neural Networks
A shift away from the prevalent supervised learning techniques is foreseeable
in the near future: from complex, computationally expensive algorithms to more
flexible and elementary training schemes. The strong revitalization of
randomized algorithms can be framed in this perspective. We recently proposed a
model for distributed classification based on randomized neural networks and
hyperdimensional computing, which accounts for the cost of information
exchange between agents by using compression. Compression is important
because it addresses the communication bottleneck; however, the original
approach is rigid in the way compression is used. Therefore,
in this work, we propose a more flexible approach to compression and compare it
to conventional compression algorithms, dimensionality reduction, and
quantization techniques.
Comment: 12 pages, 3 figures
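The paper's specific compression schemes are not detailed in the abstract; as a point of reference, the sketch below shows the kind of uniform quantization that serves as a conventional baseline, shrinking a classifier vector to a few bits per entry before agents exchange it. Function names and the bit width are illustrative assumptions.

```python
import numpy as np

def quantize(w, n_bits=4):
    """Uniformly quantize a weight vector to n_bits per entry (sketch of a
    conventional lossy compression applied before agents exchange models)."""
    levels = 2 ** n_bits - 1
    lo, hi = w.min(), w.max()
    q = np.round((w - lo) / (hi - lo) * levels).astype(np.uint8)
    return q, lo, hi

def dequantize(q, lo, hi, n_bits=4):
    """Reconstruct an approximation of the original vector on the receiver."""
    levels = 2 ** n_bits - 1
    return lo + q.astype(np.float64) / levels * (hi - lo)

# Hypothetical exchange: 4 bits per weight instead of 64-bit floats.
rng = np.random.default_rng(2)
w = rng.normal(size=512)
q, lo, hi = quantize(w)
w_hat = dequantize(q, lo, hi)
```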
Density Encoding Enables Resource-Efficient Randomly Connected Neural Networks
The deployment of machine learning algorithms on resource-constrained edge
devices is an important challenge from both theoretical and applied points of
view. In this article, we focus on resource-efficient randomly connected neural
networks known as Random Vector Functional Link (RVFL) networks since their
simple design and extremely fast training time make them very attractive for
solving many applied classification tasks. We propose to represent input
features via the density-based encoding known in the area of stochastic
computing and use the operations of binding and bundling from the area of
hyperdimensional computing to obtain the activations of the hidden neurons.
Using a collection of 121 real-world datasets from the UCI Machine Learning
Repository, we empirically show that the proposed approach demonstrates higher
average accuracy than the conventional RVFL. We also demonstrate that it is
possible to represent the readout matrix using only integers in a limited range
with minimal loss in accuracy. In this case, the proposed approach operates
only on small n-bit integers, which results in a computationally efficient
architecture. Finally, through hardware FPGA implementations, we show that such
an approach consumes approximately eleven times less energy than the
conventional RVFL.
Comment: 7 pages, 7 figures
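To make the encoding concrete, below is a small sketch of a density-based (thermometer) encoding of a feature value, plus one plausible way of combining encoded features into hidden activations with the HDC operations of binding (here, by random cyclic shift) and bundling (summation with clipping). The particular binding choice, dimensions, and threshold are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def thermometer(v, n=16):
    """Density (thermometer) encoding: a value in [0, 1] becomes an n-element
    bipolar vector whose number of +1 entries is proportional to the value."""
    k = int(round(v * n))
    return np.concatenate([np.ones(k, dtype=np.int8),
                           -np.ones(n - k, dtype=np.int8)])

def hidden_activations(features, n=16, kappa=3, seed=0):
    """Sketch of forming hidden-layer activations with HDC operations:
    bind each encoded feature via a random cyclic shift (its 'key'),
    bundle by summation, then clip to a small integer range."""
    rng = np.random.default_rng(seed)
    shifts = rng.integers(0, n, size=len(features))   # one key per feature
    bundled = np.zeros(n, dtype=np.int32)
    for v, s in zip(features, shifts):
        bundled += np.roll(thermometer(v, n), s)      # binding by permutation
    return np.clip(bundled, -kappa, kappa)            # bundling with clipping

# Hypothetical usage: three normalized input features.
h = hidden_activations([0.2, 0.7, 0.5])
```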